Pre-training large transformer models with in-domain data improves domain adaptation and helps gain performance on domain-specific downstream tasks. However, sharing models pre-trained on potentially sensitive data is prone to adversarial privacy attacks. In this paper, we ask to what extent we can guarantee the privacy of pre-training data and, at the same time, achieve better downstream performance on legal tasks without the need for additional labeled data. We extensively experiment with scalable self-supervised learning of transformer models under the formal paradigm of differential privacy and show that under specific training configurations we can improve downstream performance without sacrificing privacy protection for the in-domain data. Our main contribution is utilizing differential privacy for large-scale pre-training of transformer language models in the legal NLP domain, which, to the best of our knowledge, has not been addressed before.
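The standard mechanism behind such differentially private training is DP-SGD: clip each example's gradient to a fixed norm, average, and add calibrated Gaussian noise. A minimal NumPy sketch of one such update step (illustrative function and parameter names are ours, not the paper's code):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One DP-SGD update: clip each example's gradient to `clip_norm`,
    average, and add Gaussian noise scaled by `noise_multiplier`."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Rescale only if the per-example gradient exceeds the clip norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std follows the usual sigma * C / batch_size calibration.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_example_grads),
                       size=mean_grad.shape)
    return mean_grad + noise
```

In practice this per-example clipping is what makes DP training of large transformers expensive, which is why scalability is a central concern in the paper.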
We present a new NLP task and dataset from the domain of the U.S. civil procedure. Each instance of the dataset consists of a general introduction to the case, a particular question, and a possible solution argument, accompanied by a detailed analysis of why the argument applies in that case. Since the dataset is based on a book aimed at law students, we believe that it represents a truly complex task for benchmarking modern legal language models. Our baseline evaluation shows that fine-tuning a legal transformer provides some advantage over random baseline models, but our analysis reveals that the actual ability to infer legal arguments remains a challenging open research question.
Clinical NLP tasks such as mental health assessment from text must take social constraints into account: performance maximization must be constrained by the paramount importance of guaranteeing the privacy of user data. Consumer protection regulations such as GDPR typically handle privacy by restricting data availability, for example by requiring that user data be limited to what is "necessary" for a given purpose. In this work, we argue that providing stricter formal privacy guarantees while increasing the amount of user data in the model in most cases increases the benefit for all parties involved, especially for the user. We demonstrate our argument on two existing suicide risk assessment datasets of Twitter and Reddit posts. We present the first analysis juxtaposing user history length and differential privacy budget, and elaborate how modeling additional user context enables utility preservation while maintaining acceptable user privacy guarantees.
Text rewriting with differential privacy (DP) provides concrete theoretical guarantees for protecting the privacy of individuals in text documents. In practice, however, existing systems may lack the means to validate their privacy claims, leading to problems of transparency and reproducibility. We introduce DP-Rewrite, an open-source framework for differentially private text rewriting that aims to solve these problems by being modular, extensible, and highly customizable. Our system incorporates a variety of downstream datasets, models, pre-training procedures, and evaluation metrics to provide a flexible way to conduct and validate private text rewriting research. To demonstrate our software in practice, we provide a set of experiments as a case study on the ADePT DP text rewriting system, detecting a privacy leak in its pre-training approach. Our system is publicly available, and we hope it will help the community make DP text rewriting research more accessible and transparent.
Identifying, classifying, and analyzing arguments in legal discourse has been a prominent area of research since the inception of the argument mining field. However, there is a major discrepancy between how natural language processing (NLP) researchers model and annotate arguments in court decisions and how legal experts understand and analyze legal argumentation. While computational approaches typically simplify arguments into generic premises and claims, arguments in legal research often exhibit a rich typology that is important for gaining insight into the particular case and the application of law in general. We address this problem and make several substantial contributions to move the field forward. First, we design a new annotation scheme for legal arguments in proceedings of the European Court of Human Rights (ECHR) that is deeply rooted in the theory and practice of legal argumentation research. Second, we compile and annotate a large corpus of 373 court decisions (2.3M tokens and 15K annotated argument spans). Finally, we train an argument mining model that outperforms state-of-the-art models in the legal NLP domain and provide a thorough expert-based evaluation. All datasets and source code are available under open licenses at https://github.com/trusthlt/mining-legal-arguments.
Preserving privacy in training modern NLP models comes at a cost. We know that stricter privacy guarantees in differentially private stochastic gradient descent (DP-SGD) generally degrade model performance. However, previous research on the efficiency of DP-SGD in NLP is inconclusive or even counter-intuitive. In this short paper, we provide a thorough analysis of seven downstream datasets in five different "typical" NLP tasks of varying complexity using modern neural models. We show that unlike standard non-private approaches to solving NLP tasks, where bigger is usually better, privacy-preserving strategies do not exhibit a winning pattern, and each task and privacy regime requires special treatment to achieve adequate performance.
Machine learning models are typically evaluated by computing similarity with reference annotations and trained by maximizing similarity with such. Especially in the bio-medical domain, annotations are subjective and suffer from low inter- and intra-rater reliability. Since annotations only reflect the annotation entity's interpretation of the real world, this can lead to sub-optimal predictions even though the model achieves high similarity scores. Here, the theoretical concept of Peak Ground Truth (PGT) is introduced. PGT marks the point beyond which an increase in similarity with the reference annotation stops translating to better Real World Model Performance (RWMP). Additionally, a quantitative technique to approximate PGT by computing inter- and intra-rater reliability is proposed. Finally, three categories of PGT-aware strategies to evaluate and improve model performance are reviewed.
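The quantitative approximation of PGT rests on standard inter-rater reliability measures. A self-contained sketch of one such measure, Cohen's kappa, computed from two annotators' label sequences (the standard textbook formula, not the paper's implementation):

```python
import numpy as np

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    a, b = np.asarray(rater_a), np.asarray(rater_b)
    labels = np.union1d(a, b)
    p_observed = np.mean(a == b)
    # Expected agreement if the raters labeled independently
    # according to their own marginal label frequencies.
    p_expected = sum(np.mean(a == l) * np.mean(b == l) for l in labels)
    return (p_observed - p_expected) / (1.0 - p_expected)
```

Perfect agreement yields kappa = 1, while agreement no better than chance yields kappa near 0; under the PGT view, a low kappa signals that chasing ever-higher similarity to a single reference annotation stops paying off early.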
A "heart attack", or myocardial infarction (MI), occurs when an artery supplying blood to the heart is abruptly occluded. The "gold standard" method for imaging MI is Cardiovascular Magnetic Resonance Imaging (MRI) with intravenously administered gadolinium-based contrast (late gadolinium enhancement). However, no "gold standard" fully automated method for the quantification of MI exists. In this work, we propose an end-to-end fully automatic system (MyI-Net) for the detection and quantification of MI in MRI images. This has the potential to reduce uncertainty due to technical variability across labs and inherent problems of the data and labels. Our system consists of four processing stages designed to maintain the flow of information across scales. First, features from raw MRI images are generated using feature extractors built on ResNet and MobileNet architectures. This is followed by Atrous Spatial Pyramid Pooling (ASPP) to produce spatial information at different scales and preserve more image context. High-level features from ASPP and initial low-level features are concatenated at the third stage and then passed to the fourth stage, where spatial information is recovered via up-sampling to produce the final image segmentation into: i) background, ii) heart muscle, iii) blood, and iv) scar areas. The new models were compared with state-of-the-art models and manual quantification. Our models showed favorable performance in global segmentation and scar tissue detection relative to state-of-the-art work, including a four-fold better performance in matching scar pixels to contours produced by clinicians.
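The building block of ASPP is the atrous (dilated) convolution: kernel taps are spaced out, so the receptive field grows without adding parameters. A toy 1D NumPy sketch of the idea (our own simplified illustration, not the MyI-Net code):

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """'Atrous' (dilated) 1D convolution with valid padding: kernel taps
    are spaced `rate` samples apart, enlarging the receptive field
    without adding parameters."""
    k = len(kernel)
    span = (k - 1) * rate + 1  # effective kernel extent
    return np.array([
        sum(kernel[j] * x[i + j * rate] for j in range(k))
        for i in range(len(x) - span + 1)
    ])

def aspp_1d(x, kernel, rates=(1, 2, 4)):
    """Toy ASPP: run the same kernel at several dilation rates and
    average the outputs (truncated to a common length)."""
    outs = [dilated_conv1d(x, kernel, r) for r in rates]
    n = min(len(o) for o in outs)
    return np.mean([o[:n] for o in outs], axis=0)
```

Running the same filter at several rates in parallel is what lets ASPP capture context at multiple scales from a single feature map.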
Graph neural networks (GNN) have become the default machine learning model for relational datasets, including protein interaction networks, biological neural networks, and scientific collaboration graphs. We use tools from statistical physics and random matrix theory to precisely characterize generalization in simple graph convolution networks on the contextual stochastic block model. The derived curves are phenomenologically rich: they explain the distinction between learning on homophilic and heterophilic graphs and they predict double descent whose existence in GNNs has been questioned by recent work. Our results are the first to accurately explain the behavior not only of a stylized graph learning model but also of complex GNNs on messy real-world datasets. To wit, we use our analytic insights about homophily and heterophily to improve performance of state-of-the-art graph neural networks on several heterophilic benchmarks by a simple addition of negative self-loop filters.
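The negative self-loop idea can be pictured with a single normalized graph-convolution step: adding the identity to the adjacency gives the usual low-pass filter suited to homophilic graphs, while subtracting it gives a high-pass filter suited to heterophilic ones. A sketch (the function name and normalization choices are ours, not necessarily the authors'):

```python
import numpy as np

def graph_filter(adj, features, self_loop_sign=+1):
    """One graph-convolution step with signed self-loops.
    self_loop_sign=+1: the usual low-pass A+I filter (homophily);
    self_loop_sign=-1: a high-pass A-I filter (heterophily)."""
    a = adj + self_loop_sign * np.eye(adj.shape[0])
    # Symmetric normalization by (absolute) degree.
    deg = np.abs(a).sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    return d_inv_sqrt @ a @ d_inv_sqrt @ features
```

On a homophilic graph the low-pass filter smooths neighboring features together; on a heterophilic graph the high-pass variant instead emphasizes the contrast between a node and its neighbors.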
In this paper, we propose a new neural network architecture based on the H2 matrix. While networks with H2-inspired architectures already exist, our approach is designed to reduce memory costs and improve performance by taking into account the sparsity template of the H2 matrix. In a numerical comparison with alternative neural networks, including known H2-based ones, our architecture proved beneficial in terms of performance, memory, and scalability.
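The memory savings in H2-style structures come from storing off-diagonal blocks in low-rank factored form. A toy one-level sketch of a hierarchical matrix-vector product exploiting this (our own simplified illustration, not the proposed architecture):

```python
import numpy as np

def hier_matvec(d1, d2, u12, v12, u21, v21, x):
    """Matvec with a 2x2 block matrix whose off-diagonal blocks are
    stored in factored form (U @ V.T), as in hierarchical matrix
    formats. The off-diagonal cost is O(n*r) instead of O(n^2)."""
    n = d1.shape[0]
    x1, x2 = x[:n], x[n:]
    y1 = d1 @ x1 + u12 @ (v12.T @ x2)  # dense diagonal + low-rank off-diagonal
    y2 = d2 @ x2 + u21 @ (v21.T @ x1)
    return np.concatenate([y1, y2])
```

A full H2 format recurses this block splitting and additionally shares the low-rank bases across levels, which is the sparsity template such an architecture can exploit.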